A Wavelet Visible Difference Predictor
In this paper, we describe a model of the human visual system (HVS) based on the wavelet transform. This model is largely based on a previously proposed model, but has a number of modifications that make it more amenable to potential integration into a wavelet based image compression scheme. These modifications include the use of a separable wavelet transform instead of the cortex transform, the application of a wavelet contrast sensitivity function (CSF), and a simplified definition of subband contrast that allows us to predict noise visibility directly from wavelet coefficients. Initially, we outline the luminance, frequency, and masking sensitivities of the HVS and discuss how these can be incorporated into the wavelet transform. We then outline a number of limitations of the wavelet transform as a model of the HVS, namely the lack of translational invariance and poor orientation sensitivity. In order to investigate the efficacy of this wavelet based model, a wavelet visible difference predictor (WVDP) is described. The WVDP is then used to predict visible differences between an original and compressed (or noisy) image. Results are presented to emphasize the limitations of commonly used measures of image quality and to demonstrate the performance of the WVDP. The paper concludes with suggestions on how the WVDP can be used to determine a visually optimal quantization strategy for wavelet coefficients and produce a quantitative measure of image quality.
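The two ingredients the abstract names, a separable wavelet transform and a subband contrast computed directly from the coefficients, can be sketched as follows. This is a minimal illustration only: it uses a single-level Haar transform as a stand-in for the paper's wavelet, and the `subband_contrast` normalisation (detail magnitude over local luminance) is a hypothetical simplification, not the paper's exact definition.

```python
import numpy as np

def haar_2d(img):
    """One level of a separable 2-D Haar transform.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_contrast(detail, ll, eps=1e-6):
    """Simplified subband contrast: detail coefficient magnitude
    normalised by the local mean luminance (the LL band)."""
    return np.abs(detail) / (np.abs(ll) + eps)
```

On a uniform image all detail subbands, and hence all contrasts, are zero, which matches the intuition that no structure means nothing to mask or to see.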
spBayes: An R Package for Univariate and Multivariate Hierarchical Point-referenced Spatial Models
Scientists and investigators in such diverse fields as geological and environmental sciences, ecology, forestry, disease mapping, and economics often encounter spatially referenced data collected over a fixed set of locations with coordinates (latitude-longitude, Easting-Northing etc.) in a region of study. Such point-referenced or geostatistical data are often best analyzed with Bayesian hierarchical models. Unfortunately, fitting such models involves computationally intensive Markov chain Monte Carlo (MCMC) methods whose efficiency depends upon the specific problem at hand. This requires extensive coding on the part of the user and the situation is not helped by the lack of available software for such algorithms. Here, we introduce a statistical software package, spBayes, built upon the R statistical computing platform that implements a generalized template encompassing a wide variety of Gaussian spatial process models for univariate as well as multivariate point-referenced data. We discuss the algorithms behind our package and illustrate its use with a synthetic and real data example.
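spBayes itself is an R package, but the Gaussian spatial process at the core of the models it fits can be illustrated in a few lines. The sketch below (Python/NumPy, hypothetical parameter names) builds the covariance of a stationary process with an exponential correlation function plus a nugget, which is one common choice for point-referenced data; it is not spBayes's API.

```python
import numpy as np

def exponential_cov(coords, sigma2=1.0, phi=1.0, tau2=0.0):
    """Covariance matrix of a stationary Gaussian spatial process with
    exponential correlation:  C(h) = sigma2 * exp(-phi * h) + tau2 * 1{h=0}.
    coords: (n, 2) array of site locations (e.g. Easting/Northing)."""
    diff = coords[:, None, :] - coords[None, :, :]
    h = np.sqrt((diff ** 2).sum(-1))              # pairwise distances
    return sigma2 * np.exp(-phi * h) + tau2 * np.eye(len(coords))

def simulate_gp(coords, rng, **kw):
    """Draw one realisation of the zero-mean process at the given sites."""
    C = exponential_cov(coords, **kw)
    return rng.multivariate_normal(np.zeros(len(coords)), C)
```

The diagonal carries the full variance `sigma2 + tau2`, while off-diagonal entries decay with distance at rate `phi`; MCMC fitting, which spBayes automates, amounts to sampling these parameters given the data.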
Model Agnostic Saliency for Weakly Supervised Lesion Detection from Breast DCE-MRI
There is a heated debate on how to interpret the decisions provided by deep
learning models (DLM), where the main approaches rely on the visualization of
salient regions to interpret the DLM classification process. However, these
approaches generally fail to satisfy three conditions for the problem of lesion
detection from medical images: 1) for images with lesions, all salient regions
should represent lesions, 2) for images containing no lesions, no salient
region should be produced, and 3) lesions are generally small with relatively
smooth borders. We propose a new model-agnostic paradigm to interpret DLM
classification decisions supported by a novel definition of saliency that
incorporates the conditions above. Our model-agnostic 1-class saliency detector
(MASD) is tested on weakly supervised breast lesion detection from DCE-MRI,
achieving state-of-the-art detection accuracy when compared to current
visualization methods.
Pre and Post-hoc Diagnosis and Interpretation of Malignancy from Breast DCE-MRI
We propose a new method for breast cancer screening from DCE-MRI based on a
post-hoc approach that is trained using weakly annotated data (i.e., labels are
available only at the image level without any lesion delineation). Our proposed
post-hoc method automatically diagnoses the whole volume and, for positive
cases, it localises the malignant lesions that led to such diagnosis.
Conversely, traditional approaches follow a pre-hoc approach that initially
localises suspicious areas that are subsequently classified to establish the
breast malignancy -- this approach is trained using strongly annotated data
(i.e., it needs a delineation and classification of all lesions in an image).
Another goal of this paper is to establish the advantages and disadvantages of
both approaches when applied to breast screening from DCE-MRI. Relying on
experiments on a breast DCE-MRI dataset that contains scans of 117 patients,
our results show that the post-hoc method is more accurate for diagnosing the
whole volume per patient, achieving an AUC of 0.91, while the pre-hoc method
achieves an AUC of 0.81. However, the performance for localising the malignant
lesions remains challenging for the post-hoc method due to the weakly labelled
dataset employed during training. Comment: Submitted to Medical Image Analysis.
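The comparison above rests on the AUC, the probability that a randomly chosen positive volume scores higher than a randomly chosen negative one. As a reference for that metric (not the paper's own code), here is a minimal NumPy sketch via the rank-sum formulation:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(score of a random positive > score of a random negative),
    with ties counting one half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    # Explicit pairwise comparison; fine for cohorts of ~100 patients.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.91 versus 0.81 thus means the post-hoc classifier ranks a malignant case above a benign one noticeably more often than the pre-hoc pipeline does.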
Automated 5-year Mortality Prediction using Deep Learning and Radiomics Features from Chest Computed Tomography
We propose new methods for the prediction of 5-year mortality in elderly
individuals using chest computed tomography (CT). The methods consist of a
classifier that performs this prediction using a set of features extracted from
the CT image and segmentation maps of multiple anatomic structures. We explore
two approaches: 1) a unified framework based on deep learning, where features
and classifier are automatically learned in a single optimisation process; and
2) a multi-stage framework based on the design and selection/extraction of
hand-crafted radiomics features, followed by the classifier learning process.
Experimental results, based on a dataset of 48 annotated chest CTs, show that
the deep learning model produces a mean 5-year mortality prediction accuracy of
68.5%, while radiomics produces a mean accuracy that varies between 56% and 66%
(depending on the feature selection/extraction method and classifier). The
successful development of the proposed models has the potential to make a
profound impact on preventive and personalised healthcare. Comment: 9 pages.
Effect of Urea and Distillers Inclusion in Dry- Rolled Corn Based Diets on Heifer Performance and Carcass Characteristics
Crossbred heifers (n=96, BW = 810 ± 20) were utilized to evaluate the effects of increasing wet distillers grains plus solubles and urea inclusion in a dry rolled corn based finishing diet on performance and carcass characteristics. Heifers were individually fed using a Calan gate system with a 2 × 2 factorial arrangement of treatments. Factors included distillers inclusion at either 10 or 20% of diet DM and urea inclusion at either 0.2 or 1.4% of diet DM. There was no difference in final body weight, average daily gain, or feed conversion on a live or carcass adjusted basis for either urea or distillers inclusion in the diet. Dry matter intake was reduced with increased urea inclusion; however, distillers inclusion did not influence intake. Added distillers and urea in the diet had minimal impact on performance, suggesting supplemental urea in a dry rolled corn based finishing diet is of minimal benefit when feeding at least 10% distillers grains.
Inductive Learning using Multiscale Classification
Multiscale Classification is a simple rule-based inductive learning algorithm. It can be applied to any N-dimensional real or binary classification problem, successively splitting the feature space in half to correctly classify the training data. The algorithm has several advantages over existing rule-based and neural network approaches: it is very simple, it learns very quickly, there is no network architecture to determine, there is an associated confidence with each classification rule, and noise can be automatically added to the training data to improve generalization.
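The "successively split the feature space in half" idea can be sketched as a recursive midpoint split that cycles through the dimensions and stops when a cell contains a single class. This is a minimal reading of the abstract, not the published algorithm; details such as the splitting order, stopping rule, and per-rule confidence are assumptions here.

```python
import numpy as np

def fit_multiscale(X, y, depth=0, max_depth=10):
    """Recursively halve the feature space until each cell is class-pure.
    Cycles through dimensions, splitting each cell at its midpoint.
    Returns a nested (dim, threshold, left, right) tree or a class label."""
    if len(set(y)) <= 1 or depth >= max_depth:
        vals, counts = np.unique(y, return_counts=True)
        return vals[np.argmax(counts)]          # majority class of the cell
    dim = depth % X.shape[1]
    thr = (X[:, dim].min() + X[:, dim].max()) / 2.0   # halve along this axis
    left = X[:, dim] <= thr
    if left.all() or (~left).all():             # degenerate split: stop
        vals, counts = np.unique(y, return_counts=True)
        return vals[np.argmax(counts)]
    return (dim, thr,
            fit_multiscale(X[left], y[left], depth + 1, max_depth),
            fit_multiscale(X[~left], y[~left], depth + 1, max_depth))

def predict(tree, x):
    """Walk the split tree down to a leaf label."""
    while isinstance(tree, tuple):
        dim, thr, lo, hi = tree
        tree = lo if x[dim] <= thr else hi
    return tree
```

Each root-to-leaf path is a readable rule ("x0 ≤ 0.5 → class 0"), which is where a per-rule confidence (e.g. the leaf's class purity) could naturally attach.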
A One-Pass Extended Depth of Field Algorithm Based on the Over-Complete Discrete Wavelet Transform
In this paper we describe an algorithm for extended depth of field (EDF) imaging based on the over-complete discrete wavelet transform (OCDWT). We extend previous approaches by describing a potentially real-time algorithm that produces the EDF image after a single pass through the "stack" of focal plane images. In addition, we specifically study the effect of over-sampling on EDF reconstruction accuracy and show that a small degree of over-sampling considerably improves the quality of the EDF image.
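The single-pass structure can be sketched as a running per-pixel "sharpest so far" selection over the focal stack. Note the hedge: the paper selects on over-complete wavelet detail energy, whereas this sketch substitutes a simple Laplacian focus measure so it stays self-contained; the fusion loop itself is the one-pass idea.

```python
import numpy as np

def laplacian_energy(img):
    """Simple focus measure: magnitude of a discrete Laplacian response.
    Stands in for the OCDWT detail-coefficient energy used in the paper."""
    p = np.pad(img, 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse_stack(stack):
    """One pass through the focal stack: at each pixel, keep the value
    from the plane with the highest focus measure seen so far."""
    it = iter(stack)
    out = next(it).copy()
    best_e = laplacian_energy(out)
    for plane in it:                 # each plane is visited exactly once
        e = laplacian_energy(plane)
        sharper = e > best_e
        out[sharper] = plane[sharper]
        best_e[sharper] = e[sharper]
    return out
```

Because only the current plane and the running best are held in memory, planes can be consumed as they arrive from the camera, which is what makes a real-time variant plausible.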